The exercises of week 4 are based on the Boston data from the MASS library. The data consists of housing values in suburbs of Boston, and more information about it can be found here. Among other things, the data contains crime rates, which will be analyzed in more detail.
library(MASS) #reading the library
data("Boston") #reading data
str(Boston) # 506 obs. of 14 variables
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston) # 506 rows and 14 columns
## [1] 506 14
pairs(Boston) ## plotting matrix of the variables
The pairs plot shows the distributions of and relationships between the variables. Because there are so many variables, the plot is rather hard to read.
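One way to get a clearer picture (an illustrative sketch, with the variables chosen by hand) is to restrict the pairs plot to a few variables of interest:
pairs(Boston[, c("crim", "rad", "tax", "nox", "medv")]) # scatterplot matrix of five variables only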
Next I will look at the correlations between variables in the data.
library(corrplot)
## corrplot 0.84 loaded
library(tidyverse)
## -- Attaching packages --------------------------------------- tidyverse 1.3.0 --
## v ggplot2 3.3.2 v purrr 0.3.4
## v tibble 3.0.4 v dplyr 1.0.2
## v tidyr 1.1.2 v stringr 1.4.0
## v readr 1.4.0 v forcats 0.5.0
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
## x dplyr::select() masks MASS::select()
cor_matrix <-cor(Boston) %>% round(digits=2) #calculating correlation matrix
cor_matrix #printing the matrix
## crim zn indus chas nox rm age dis rad tax ptratio
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58 0.29
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31 -0.39
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72 0.38
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04 -0.12
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67 0.19
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29 -0.36
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51 0.26
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53 -0.23
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91 0.46
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00 0.46
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46 1.00
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44 -0.18
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54 0.37
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47 -0.51
## black lstat medv
## crim -0.39 0.46 -0.39
## zn 0.18 -0.41 0.36
## indus -0.36 0.60 -0.48
## chas 0.05 -0.05 0.18
## nox -0.38 0.59 -0.43
## rm 0.13 -0.61 0.70
## age -0.27 0.60 -0.38
## dis 0.29 -0.50 0.25
## rad -0.44 0.49 -0.38
## tax -0.44 0.54 -0.47
## ptratio -0.18 0.37 -0.51
## black 1.00 -0.37 0.33
## lstat -0.37 1.00 -0.74
## medv 0.33 -0.74 1.00
Correlation plot:
corrplot(cor_matrix, method="circle",type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6 )
The correlation plot shows that there are strong positive and negative correlations between certain variables: for example, rad and tax are strongly positively correlated (0.91), while dis and nox are strongly negatively correlated (-0.77). The correlation matrix above gives the exact numerical values.
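To pick out the strongest pairwise correlations programmatically, one option (a small sketch using only base R) is to reshape the correlation matrix into long form and sort by absolute value:
cor_long <- as.data.frame(as.table(cor_matrix)) # long form: Var1, Var2, Freq (= correlation)
cor_long <- subset(cor_long, as.character(Var1) < as.character(Var2)) # keep each pair only once
head(cor_long[order(-abs(cor_long$Freq)), ], 5) # five strongest correlations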
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
The summary table shows the mean and median of each variable, as well as the minimum, maximum, and quartiles.
Standardize the dataset and print out summaries of the scaled data. How did the variables change? Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate). Use the quartiles as the break points in the categorical variable. Drop the old crime rate variable from the dataset. Divide the dataset to train and test sets, so that 80% of the data belongs to the train set. (0-2 points)
Next I will center and standardize the variables with the scale() function, which subtracts the column mean from each value and divides by the column standard deviation.
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
If we now look at the summary table, we can see that the values have changed: every variable now has mean 0 (and standard deviation 1), and most of the minimum, first-quartile, and median values are negative.
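As a sanity check, scale() should give exactly the same result as subtracting the mean and dividing by the standard deviation by hand; a minimal sketch for the crim variable:
manual_crim <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim) # standardizing crim by hand
all.equal(as.numeric(boston_scaled[, "crim"]), manual_crim) # should be TRUE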
boston_scaled <- as.data.frame(boston_scaled) #changing the object to data frame
class(boston_scaled)
## [1] "data.frame"
Next I will create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate).
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
bins <- quantile(boston_scaled$crim) #creating a quantile vector of crim and printing it
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# creating a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
boston_scaled <- dplyr::select(boston_scaled, -crim)# removing original crim from the dataset
boston_scaled <- data.frame(boston_scaled, crime)# adding the new categorical value to scaled data
str(boston_scaled) #looking that everything worked
## 'data.frame': 506 obs. of 14 variables:
## $ zn : num 0.285 -0.487 -0.487 -0.487 -0.487 ...
## $ indus : num -1.287 -0.593 -0.593 -1.306 -1.306 ...
## $ chas : num -0.272 -0.272 -0.272 -0.272 -0.272 ...
## $ nox : num -0.144 -0.74 -0.74 -0.834 -0.834 ...
## $ rm : num 0.413 0.194 1.281 1.015 1.227 ...
## $ age : num -0.12 0.367 -0.266 -0.809 -0.511 ...
## $ dis : num 0.14 0.557 0.557 1.077 1.077 ...
## $ rad : num -0.982 -0.867 -0.867 -0.752 -0.752 ...
## $ tax : num -0.666 -0.986 -0.986 -1.105 -1.105 ...
## $ ptratio: num -1.458 -0.303 -0.303 0.113 0.113 ...
## $ black : num 0.441 0.441 0.396 0.416 0.441 ...
## $ lstat : num -1.074 -0.492 -1.208 -1.36 -1.025 ...
## $ medv : num 0.16 -0.101 1.323 1.182 1.486 ...
## $ crime : Factor w/ 4 levels "low","med_low",..: 1 1 1 1 1 1 2 2 2 2 ...
Next I will divide the dataset to train and test sets, so that 80% of the data belongs to the train set.
n <- nrow(boston_scaled)# number of rows in the Boston dataset
ind <- sample(n, size = n * 0.8)# choosing randomly 80% of the rows
train <- boston_scaled[ind,] # creating train set containing 80 % of the data
test <- boston_scaled[-ind,] # creating test set containing 20 % of the data
Next I will fit the linear discriminant analysis on the train set. I will use the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables.
lda.fit <- lda(crime ~., data = train) # linear discriminant analysis
lda.fit # print the lda.fit object
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2400990 0.2574257 0.2500000 0.2524752
##
## Group means:
## zn indus chas nox rm age
## low 0.9349989 -0.9340350 -0.109974419 -0.8717619 0.40288933 -0.8817135
## med_low -0.1058810 -0.2417528 -0.007331936 -0.5651173 -0.12707950 -0.3475990
## med_high -0.4023350 0.2134165 0.195445218 0.4661624 -0.04641872 0.4145341
## high -0.4872402 1.0171096 -0.002135914 1.0764482 -0.34114341 0.7904963
## dis rad tax ptratio black lstat
## low 0.8560124 -0.6764026 -0.7483441 -0.40327083 0.38385860 -0.7722106
## med_low 0.3494809 -0.5500918 -0.4598202 -0.04652584 0.30754012 -0.1566894
## med_high -0.4157672 -0.4190087 -0.3097095 -0.29959318 0.05864284 0.1069822
## high -0.8393948 1.6382099 1.5141140 0.78087177 -0.68991502 0.8941351
## medv
## low 0.51418896
## med_low -0.01328999
## med_high 0.04315452
## high -0.64507330
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.09122829 0.66665268 -0.96202543
## indus 0.05802113 -0.23892754 0.53596350
## chas -0.07809839 -0.05534393 0.11547559
## nox 0.33367634 -0.70242001 -1.39041153
## rm -0.10388361 -0.05848535 -0.02553381
## age 0.22078324 -0.33149639 -0.07032695
## dis -0.09344412 -0.20024390 0.35607716
## rad 3.22338975 0.87579229 -0.14157562
## tax -0.06982094 0.06840261 0.64996947
## ptratio 0.11159391 0.05749545 -0.30643711
## black -0.10036931 0.01368226 0.09409443
## lstat 0.21216968 -0.17785315 0.24532524
## medv 0.19201862 -0.26006836 -0.26695304
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9460 0.0397 0.0143
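LD1 alone accounts for about 95% of the between-group variance. As a small sanity check (a sketch assuming the standard structure of the MASS::lda fit object), the proportion of trace can be recomputed from the singular values stored in the fit:
round(lda.fit$svd^2 / sum(lda.fit$svd^2), digits = 4) # squared singular values scaled to sum to one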
I will draw the LDA (bi)plot, but first we need a helper function to draw the biplot arrows.
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1, 2)) {
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]],
         col = color, length = arrow_heads)
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
classes <- as.numeric(train$crime) # target classes as numeric
plot(lda.fit, dimen = 2, col = classes, pch = classes)# plotting the lda results
lda.arrows(lda.fit, myscale = 2) #adding the arrows
The plot looks different from the one I made in the DataCamp exercise, but since the 80% train set is drawn randomly from the Boston data, the split (and therefore the plot) is different on every run.
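If a reproducible split is wanted, a seed could be set before drawing the sample; a minimal sketch (the seed value is arbitrary and chosen only for illustration):
set.seed(2020) # any fixed seed makes the random split reproducible
ind <- sample(n, size = n * 0.8) # same 80% of rows on every run with the same seed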
Next I will save the crime categories from the test set and then remove the categorical crime variable from the test dataset. Then I will predict the classes with the LDA model on the test data.
correct_classes <- test$crime # saving the correct classes from test data
test <- dplyr::select(test, -crime) # removing the crime variable from test data
lda.pred <- predict(lda.fit, newdata = test) # predicting classes with test data
table(correct = correct_classes, predicted = lda.pred$class) # cross tabulating the results
## predicted
## correct low med_low med_high high
## low 17 10 3 0
## med_low 3 14 5 0
## med_high 0 6 18 1
## high 0 0 0 25
Looking at the prediction results, the model works quite well. The high class is handled especially well: all 25 high observations are predicted correctly, and only one observation predicted as high actually belongs to med_high. In the other classes, roughly 55-70% of the observations end up in the correct category, with most errors falling into a neighbouring class.
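The overall accuracy can also be summarised with a single number by taking the proportion of observations on the diagonal of the cross-tabulation; a small sketch using the objects defined above:
conf <- table(correct = correct_classes, predicted = lda.pred$class) # confusion matrix
sum(diag(conf)) / sum(conf) # proportion of correctly classified test observations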
Reload the Boston dataset and standardize the dataset (we did not do this in the Datacamp exercises, but you should scale the variables to get comparable distances).
data(Boston)#reloading the Boston dataset
boston_scaled <- scale(Boston) #scaling
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
boston_scaled <- as.data.frame(boston_scaled) #changing the object to data frame
class(boston_scaled)
## [1] "data.frame"
Next I will calculate the distances between the observations and run the k-means algorithm on the dataset.
dist_eu <- dist(boston_scaled) # euclidean distance matrix
summary(dist_eu) #summary of the distances
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
dist_man <- dist(boston_scaled, method = 'manhattan')# manhattan distance matrix
summary(dist_man)#summary of the distances
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
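As a sketch of what the euclidean distance measures, the first entry of the distance matrix can be reproduced directly from the first two rows of the scaled data:
x_mat <- as.matrix(boston_scaled) # scaled data as a numeric matrix
sqrt(sum((x_mat[1, ] - x_mat[2, ])^2)) # euclidean distance between observations 1 and 2
as.matrix(dist_eu)[1, 2] # should give the same value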
km <-kmeans(boston_scaled, centers = 3) # k-means clustering
pairs(boston_scaled[1:6], col = km$cluster)# plotting the scaled Boston dataset with clusters
pairs(boston_scaled[7:14], col = km$cluster)# perform plotting in two parts so that plots are easier to look at
Next I will investigate what is the optimal number of clusters.
library(ggplot2)
set.seed(123)
k_max <- 15 #determining the number of clusters
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss}) # calculate the total within sum of squares
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
Looking at the total WCSS plot, the optimal number of clusters is where the value drops sharply. There is no single dramatic drop here, but it looks like five clusters might be a reasonable choice.
I will now run the clustering again with five centers.
km <-kmeans(boston_scaled, centers = 5) # now 5 centers
pairs(boston_scaled[1:6], col = km$cluster)# plotting the scaled Boston dataset with clusters
pairs(boston_scaled[7:14], col = km$cluster)# perform plotting in two parts so that plots are easier to look at
The plots above show the clustering in different colours. Some clusters separate very clearly, but in other variable pairs the clusters lie on top of each other, so I am not entirely sure what the optimal number of clusters would be in this case.
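One thing that might help is re-running k-means with several random starts (the nstart argument of kmeans), which keeps the solution with the smallest total WCSS for each k and makes the curve less dependent on the initial centers; an illustrative sketch:
set.seed(123)
twcss_stable <- sapply(1:k_max, function(k){kmeans(boston_scaled, centers = k, nstart = 20)$tot.withinss}) # best of 20 random starts per k
qplot(x = 1:k_max, y = twcss_stable, geom = 'line')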
Super-Bonus: Run the code below for the (scaled) train data that you used to fit the LDA. The code creates a matrix product, which is a projection of the data points.
model_predictors <- dplyr::select(train, -crime)
dim(model_predictors)# check the dimensions
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
I installed the package with install.packages("plotly").
library(plotly)#accessing library
##
## Attaching package: 'plotly'
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
Let's create a 3D plot of the columns of the matrix product.
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
Adjust the code: add color as an argument in the plot_ly() function. Set the color to be the crime classes of the train set. Draw another 3D plot where the color is defined by the clusters of the k-means. How do the plots differ? Are there any similarities? (0-3 points to compensate any loss of points from the above exercises)
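A sketch of how this could be done: the crime classes of the train set can be passed to the color argument of plot_ly(), and for the second plot k-means can be run on the same train-set predictors (four centers here, chosen only to match the number of crime classes).
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = 'scatter3d', mode = 'markers', color = train$crime) # colored by crime class
km_train <- kmeans(model_predictors, centers = 4) # k-means on the train-set predictors
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = 'scatter3d', mode = 'markers', color = factor(km_train$cluster)) # colored by k-means cluster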